    Determinism and inevitability

    In Freedom Evolves, Dan Dennett embarks on his second book-length attempt to lay to rest the deep metaphysical concerns that many philosophers have expressed about the possibility of human freedom. One of his main objectives in the earlier chapters of the book is to make determinism appear less threatening to our prospects for free agency than it has sometimes seemed, by attempting to show that a deterministic universe would not necessarily be a universe of which it could truly be said that everything that occurs in it is inevitable. In this paper, I want to consider Dennett’s striking argument for this conclusion in some detail. I shall begin by suggesting that on its most natural interpretation, the argument is vulnerable to a serious objection. I shall then develop a second interpretation which is more promising than the first, but will argue that without placing more weight on etymological considerations than they can really bear, it can deliver, at best, only a significantly qualified version of the conclusion that Dennett is seeking. However, although I shall be arguing that his central argument fails, it is also part of the purpose of this paper to build on what I regard as some rather insightful and suggestive material which is developed by Dennett in the course of elaborating his views. His own development of these ideas is hampered, so I shall argue, by a framework for thinking about possibility that is too crude to accommodate the immense subtlety and complexity exhibited by the workings of the modal verb ‘can’ and its past tense form, ‘could’; and also, I believe, by the mistaken conviction, on Dennett’s part, that any naturalistically respectable solution to the problem of free will would have to be of a compatibilist stripe.
I shall attempt, in the second half of the paper, to explain what seems to me to be wrong with the framework, and to make some points about the functioning of ‘can’ and ‘could’ which I believe any adequate replacement for Dennett’s framework must respect. Ironically, though, I shall argue that it is the rejection of Dennett’s own framework which holds the key to understanding how to defend the spirit (if not the letter) of his thoughts about the invulnerability of our ordinary modal thinking to alleged threats from determinism

    Personal and sub-personal: a defence of Dennett's early distinction

    Since 1969, when Dennett introduced a distinction between personal and sub‐personal levels of explanation, many philosophers have used ‘sub‐personal’ very loosely, and Dennett himself has abandoned a view of the personal level as genuinely autonomous. I recommend a position in which Dennett's original distinction is crucial, by arguing that the phenomenon called mental causation is on view only at the properly personal level. If one retains the commitments incurred by Dennett's early distinction, then one has a satisfactory anti‐physicalistic, anti‐dualist philosophy of mind. It neither interferes with the projects of sub‐personal psychology, nor encourages instrumentalism at the personal level. People lose sight of Dennett’s personal/sub-personal distinction because they free it from its philosophical moorings. A distinction that serves a philosophical purpose is typically rooted in doctrine; it cannot be lifted out of context and continue to do its work. So I shall start from Dennett’s distinction as I read it in its original context. And when I speak of ‘the distinction’, I mean to point not only towards the terms that Dennett first used to define it but also towards the philosophical setting within which its work was cut out

    Beyond persons: extending the personal / subpersonal distinction to non-rational animals and artificial agents

    The distinction between personal level explanations and subpersonal ones has been subject to much debate in philosophy. We understand it as one between explanations that focus on an agent’s interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial Life that purports to do without the distinction entirely, while using agent-centered concepts all the way. It is argued that the rejection of agent level explanations in favour of mechanistic ones is due to an unmotivated need to choose between representationalism and eliminativism. The dilemma is a false one if the possibility of a radical form of externalism is considered

    Artificial Brains and Hybrid Minds

    The paper develops two related thought experiments exploring variations on an ‘animat’ theme. Animats are hybrid devices with both artificial and biological components. Traditionally, ‘components’ have been construed in concrete terms, as physical parts or constituent material structures. Many fascinating issues arise within this context of hybrid physical organization. However, within the context of functional/computational theories of mentality, demarcations based purely on material structure are unduly narrow. It is abstract functional structure which does the key work in characterizing the respective ‘components’ of thinking systems, while the ‘stuff’ of material implementation is of secondary importance. Thus the paper extends the received animat paradigm, and investigates some intriguing consequences of expanding the conception of bio-machine hybrids to include abstract functional and semantic structure. In particular, the thought experiments consider cases of mind-machine merger where there is no physical Brain-Machine Interface: indeed, the material human body and brain have been removed from the picture altogether. The first experiment illustrates some intrinsic theoretical difficulties in attempting to replicate the human mind in an alternative material medium, while the second reveals some deep conceptual problems in attempting to create a form of truly Artificial General Intelligence

    The evolution of misbelief

    From an evolutionary standpoint, a default presumption is that true beliefs are adaptive and misbeliefs maladaptive. But if humans are biologically engineered to appraise the world accurately and to form true beliefs, how are we to explain the routine exceptions to this rule? How can we account for mistaken beliefs, bizarre delusions, and instances of self-deception? We explore this question in some detail. We begin by articulating a distinction between two general types of misbelief: those resulting from a breakdown in the normal functioning of the belief formation system (e.g., delusions) and those arising in the normal course of that system's operations (e.g., beliefs based on incomplete or inaccurate information). The former are instances of biological dysfunction or pathology, reflecting “culpable” limitations of evolutionary design. Although the latter category includes undesirable (but tolerable) by-products of “forgivably” limited design, our quarry is a contentious subclass of this category: misbeliefs best conceived as design features. Such misbeliefs, unlike occasional lucky falsehoods, would have been systematically adaptive in the evolutionary past. Such misbeliefs, furthermore, would not be reducible to judicious - but doxastically noncommittal - action policies. Finally, such misbeliefs would have been adaptive in themselves, constituting more than mere by-products of adaptively biased misbelief-producing systems. We explore a range of potential candidates for evolved misbelief, and conclude that, of those surveyed, only positive illusions meet our criteria

    Relativistic and slowing down: the flow in the hotspots of powerful radio galaxies and quasars

    Pairs of radio emitting jets with lengths up to several hundred kiloparsecs emanate from the central region (the ‘core’) of radio loud active galaxies. In the most powerful of them, these jets terminate in the ‘hotspots’, compact high brightness regions where the jet flow collides with the intergalactic medium (IGM). Although it has long been established that in their inner (~parsec) regions these jet flows are relativistic, it is still not clear if they remain so at their largest (hundreds of kiloparsec) scales. We argue that the X-ray, optical and radio data of the hotspots, despite their at-first-sight disparate properties, can be unified in a scheme involving a relativistic flow upstream of the hotspot that decelerates to the sub-relativistic speed of its inferred advance through the IGM and is viewed at different angles to its direction of motion. Besides providing an account of the variation of hotspot spectral properties with jet orientation, this scheme also suggests that the large-scale jets remain relativistic all the way to the hotspots. (Comment: to appear in ApJ)

    The expressive stance: intentionality, expression, and machine art

    This paper proposes a new stance for interpreting artistic works and performances, one that is relevant to artificial intelligence research but also has broader implications. Termed the expressive stance, this stance makes intelligible a critical distinction between present-day machine art and human art, but allows for the possibility that future machine art could find a place alongside our own. The expressive stance is elaborated as a response to Daniel Dennett's notion of the intentional stance, which is critically examined with respect to his specialized concept of rationality. The paper also shows that temporal scale implicitly serves to select between different modes of explanation in prominent theories of intentionality, and considers the implications of the phenomenological background for systems that produce art

    “Free Will and Affirmation: Assessing Honderich’s Third Way”

    In the third and final part of his A Theory of Determinism (TD) Ted Honderich addresses the fundamental question concerning “the consequences of determinism.” The critical question he aims to answer is: what follows if determinism is true? This question is, of course, intimately bound up with the problem of free will and, in particular, with the question of whether the truth of determinism is compatible with the sort of freedom required for moral responsibility. It is Honderich’s aim to provide a solution to “the problem of the consequences of determinism,” and a key element of this is his articulation and defence of an alternative response to the implications of determinism that collapses the familiar Compatibilist/Incompatibilist dichotomy. Honderich offers us a third way – the response of “Affirmation” (HFY 125-6). Although his account of Affirmation has application and relevance to issues and features beyond freedom and responsibility, my primary concern in this essay will be to examine Honderich’s theory of “Affirmation” as it concerns the free will problem

    Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems

    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they are presumptuous. After elaborating this moral concern, I explore the possibility that carefully procuring the training data for image recognition systems could ensure that the systems avoid the problem. The lesson of this paper extends beyond just the particular case of image recognition systems and the challenge of responsibly identifying a person’s intentions. Reflection on this particular case demonstrates the importance (as well as the difficulty) of evaluating machine learning systems and their training data from the standpoint of moral considerations that are not encompassed by ordinary assessments of predictive accuracy